70 research outputs found

    Dynamic Generation of Intelligent Multimedia Presentations Through Semantic Inferencing

    This paper first proposes a high-level architecture for semi-automatically generating multimedia presentations by combining semantic inferencing with multimedia presentation generation tools. It then describes a system, based on this architecture, which was developed as a service to run over OAI archives but is applicable to any repository containing mixed-media resources described using Dublin Core. By applying an iterative sequence of searches across the Dublin Core metadata published by the OAI data providers, semantic relationships can be inferred between the mixed-media objects that are retrieved. Using predefined mapping rules, these semantic relationships are then mapped to spatial and temporal relationships between the objects. The spatial and temporal relationships are expressed within SMIL files which can be replayed as multimedia presentations. Our underlying hypothesis is that by using automated computer processing of metadata to organize and combine semantically related objects within multimedia presentations, the system may be able to generate new knowledge by exposing previously unrecognized connections. In addition, the use of multilayered, information-rich multimedia to present the results enables faster and easier information browsing, analysis, interpretation and deduction by the end user.
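
    To make the mapping step concrete, the following minimal Python sketch shows how an inferred relationship between two Dublin Core records could be turned into a SMIL arrangement by predefined rules. The relation labels, rule table and SMIL layout are invented for illustration and are not the system's actual rules.

        # Hypothetical relation labels and mapping rules; the real system's
        # inference rules and SMIL conventions are not specified here.
        def infer_relation(a, b):
            """Infer a semantic relationship from two Dublin Core records (dicts)."""
            if a.get("dc:creator") == b.get("dc:creator"):
                return "same-creator"
            if a.get("dc:subject") == b.get("dc:subject"):
                return "same-subject"
            return None

        # Predefined mapping rules: semantic relation -> SMIL container.
        RELATION_TO_SMIL = {
            "same-creator": "par",   # show the objects side by side, in parallel
            "same-subject": "seq",   # show the objects one after the other
        }

        def to_smil(a, b, relation):
            container = RELATION_TO_SMIL[relation]
            return (f'<smil><body><{container}>\n'
                    f'  <img src="{a["url"]}" region="left" dur="5s"/>\n'
                    f'  <img src="{b["url"]}" region="right" dur="5s"/>\n'
                    f'</{container}></body></smil>')

        a = {"dc:creator": "Smith", "dc:subject": "coral reefs", "url": "a.jpg"}
        b = {"dc:creator": "Smith", "dc:subject": "fish", "url": "b.jpg"}
        print(to_smil(a, b, infer_relation(a, b)))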

    Video on the semantic web: experiences with media streams

    In this paper, we report our experiences with the use of Semantic Web technology for annotating digital video material. Web technology is used to transform a large, existing video ontology embedded in an annotation tool into a commonly accessible format. The recombination of existing video material is then used as an example application, in which the video metadata enables the retrieval of video footage based on both content descriptions and cinematographic concepts, such as establishing and reaction shots. The paper focuses on the practical issues of porting ontological information to the Semantic Web, the multimedia-specific issues of video annotation, and requirements for Semantic Web query and access patterns. It thereby explicitly aims at providing input to the two new W3C Semantic Web Working Groups (Best Practices and Deployment; Data Access).
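
    The following small sketch illustrates the general idea of such annotations on the Semantic Web: video shots described in RDF with both a content description and a cinematographic concept, then retrieved with a SPARQL query. It uses rdflib, and the ms: namespace and property names are invented stand-ins rather than the exported Media Streams vocabulary.

        # Video shots annotated with content terms and a cinematographic
        # concept, then retrieved with SPARQL; ms: terms are hypothetical.
        from rdflib import Graph, Namespace, URIRef, Literal
        from rdflib.namespace import RDF

        MS = Namespace("http://example.org/mediastreams#")    # hypothetical namespace
        g = Graph()

        shot = URIRef("http://example.org/footage/shot42")
        g.add((shot, RDF.type, MS.ReactionShot))               # cinematographic concept
        g.add((shot, MS.depicts, Literal("crowd cheering")))   # content description
        g.add((shot, MS.source, Literal("match_final.mpg")))

        # Retrieve all reaction shots, regardless of where they were annotated.
        q = """
        PREFIX ms: <http://example.org/mediastreams#>
        SELECT ?shot ?content WHERE {
          ?shot a ms:ReactionShot ;
                ms:depicts ?content .
        }
        """
        for row in g.query(q):
            print(row.shot, row.content)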

    CHORUS Deliverable 4.5: Report of the 3rd CHORUS Conference

    The third and last CHORUS conference on Multimedia Search Engines took place from the 26th to the 27th of May 2009 in Brussels, Belgium. About 100 participants from 15 European countries, the US, Japan and Australia learned about the latest developments in the domain. An exhibition of 13 stands presented 16 research projects currently ongoing around the world.

    Discourse knowledge in device independent document formatting

    Most document structures define layout structures which implicitly define semantic relationships between content elements. While document structures for text are well established (books, reports, papers etc.), models for time-based documents such as multimedia and hypermedia are relatively new and lack established document structures. Traditional document description languages convey domain-dependent semantic relationships implicitly, using domain-independent mark-up for expressing layout. This works well for textual documents, as, for example, CSS and HTML demonstrate. True device independence, however, sometimes requires a change of document model to maintain the content semantics. To achieve this we need explicit information about the discourse.
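
    A toy sketch of the underlying idea, assuming an invented discourse relation and invented device profiles: once the discourse relation is explicit, the same content can be re-formatted per device by changing the document model rather than the content.

        # Explicit discourse relation plus per-device rules replace layout
        # mark-up; relation names and device profiles are invented.
        CONTENT = {
            "relation": "contrast",                  # explicit discourse relation
            "items": ["claim.html", "counterexample.html"],
        }

        LAYOUT_RULES = {
            # (discourse relation, device) -> layout structure
            ("contrast", "desktop"): "two-column",   # room to show both side by side
            ("contrast", "phone"):   "sequence",     # small screen: one after the other
        }

        def format_for(device, content):
            layout = LAYOUT_RULES[(content["relation"], device)]
            return {"layout": layout, "items": content["items"]}

        print(format_for("desktop", CONTENT))
        print(format_for("phone", CONTENT))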

    Requirements for practical multimedia annotation

    Applications that use annotated multimedia assets need to be able to process all the annotations about a specific media asset. At first sight this seems almost trivial, but annotations are needed for different levels of description; these need to be related to each other in the appropriate way and, in particular on the Semantic Web, annotations may not all be stored in the same place. We distinguish technical descriptions of a media asset from content-level descriptions. At both levels, the annotations needed in a single application may come from different vocabularies. In addition, the instantiated values for a term used from an ontology also need to be specified. We present a number of existing vocabularies related to multimedia.
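
    As an illustration of the two description levels, the sketch below annotates a single media asset with Dublin Core terms (technical/descriptive level) and with an invented domain vocabulary (content level); the vocabularies actually surveyed in the paper are not reproduced here.

        # One media asset, two annotation levels, two vocabularies.
        from rdflib import Graph, Namespace, URIRef, Literal
        from rdflib.namespace import DC, RDF

        EX = Namespace("http://example.org/art#")    # hypothetical domain vocabulary
        asset = URIRef("http://example.org/media/painting01.jpg")

        g = Graph()
        # Technical / descriptive level (Dublin Core)
        g.add((asset, DC.format, Literal("image/jpeg")))
        g.add((asset, DC.creator, Literal("Unknown photographer")))
        # Content level (domain ontology): what the image depicts
        g.add((asset, EX.depicts, EX.TheNightWatch))
        g.add((EX.TheNightWatch, RDF.type, EX.Painting))

        # An application can now collect all annotations about the asset,
        # whichever vocabulary (or store) they come from.
        for p, o in g.predicate_objects(asset):
            print(p, o)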

    Application-specific constraints for multimedia presentation generation

    The paper describes the advantages of using constraint logic programming, in combination with efficient constraint-solving techniques, to articulate transformation rules for multimedia presentations. It demonstrates the need for two different types of constraints. Quantitative constraints are needed to verify whether the final-form presentation meets all the numeric constraints that are required by the environment. Qualitative constraints are needed to facilitate high-level reasoning and presentation encoding. While the quantitative constraints can be handled by off-the-shelf constraint solvers, the qualitative constraints needed are specific to the multimedia domain and need to be defined explicitly.
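
    The distinction can be illustrated with a small Python sketch in which a brute-force search stands in for the constraint solver; the predicates and numbers are invented, not taken from the paper.

        # Brute force stands in for the solver; predicates and numbers are invented.
        from itertools import permutations

        HEIGHTS = {"image": 300, "caption": 120, "banner": 200}   # pixels
        SCREEN_HEIGHT = 600

        def quantitative_ok(layout):
            # Numeric constraint, suitable for an off-the-shelf solver:
            # stacked items must fit within the available screen height.
            return sum(HEIGHTS[i] for i in layout) <= SCREEN_HEIGHT

        def qualitative_ok(layout):
            # Multimedia-specific qualitative constraint: a caption must
            # appear somewhere below the image it describes.
            return ("image" not in layout or "caption" not in layout
                    or layout.index("image") < layout.index("caption"))

        # Try every ordering of every two- or three-item subset.
        valid = [layout
                 for size in (2, 3)
                 for layout in permutations(HEIGHTS, size)
                 if quantitative_ok(layout) and qualitative_ok(layout)]
        print(valid)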

    Application-specific constraints for multimedia presentation generation

    A multimedia presentation can be viewed as a collection of multimedia items (such as image, text, video and audio), along with detailed information that describes the spatial and temporal placement of the items as part of the presentation. Manual multimedia authoring involves explicitly stating the placement of each media item in the spatial and temporal dimensions. The drawback of this approach is that the resulting presentations are hard to adapt to different target platforms, network resources, and user preferences. An approach to solving this problem is to abstract from this explicit placement.

    Towards ontology-driven discourse: from semantic graphs to multimedia presentations

    Traditionally, research in applying Semantic Web technology to multimedia information systems has focused on using annotations and ontologies to improve the retrieval process. This paper concentrates on improving the presentation of the retrieval results. First, our approach uses ontological domain knowledge to select and organize the content relevant to the topic the user is interested in. Domain ontologies are valuable in the presentation generation process, because effective presentations are those that succeed in conveying the relevant domain semantics to the user. Explicit discourse and narrative knowledge allows selection of appropriate presentation genres and creation of narrative structures, which are used for conveying these domain relations. In addition, knowledge of graphic design and media characteristics is essential to transform abstract presentation structures into real multimedia presentations. Design knowledge determines how the semantics and presentation structure are expressed in the multimedia presentation. In traditional Web environments, this type of design knowledge remains implicit, hidden in style sheets and other document transformation code. Our second use of Semantic Web technology is to model design knowledge explicitly, and to let it drive the transformations needed to turn annotated media items into structured presentations.
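
    The schematic sketch below only indicates where the three kinds of knowledge (domain, discourse, design) plug into such a pipeline; the relation, genre and design-rule names are invented, not the paper's actual rules.

        # Where domain, discourse and design knowledge enter the pipeline;
        # all relation, genre and rule names are invented.
        DOMAIN_GRAPH = [  # simplified semantic graph: (subject, relation, object)
            ("Rembrandt", "painted", "The Night Watch"),
            ("The Night Watch", "exemplifies", "chiaroscuro"),
        ]

        # Discourse knowledge: which narrative structure suits which relations.
        def choose_genre(graph):
            relations = {r for _, r, _ in graph}
            return "biography" if "painted" in relations else "topic-overview"

        # Design knowledge, stated explicitly instead of hidden in a style sheet.
        DESIGN_RULES = {
            "biography":      {"layout": "portrait-left, works-right", "pacing": "slow"},
            "topic-overview": {"layout": "grid", "pacing": "fast"},
        }

        genre = choose_genre(DOMAIN_GRAPH)
        print(genre, DESIGN_RULES[genre])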

    Towards a multimedia formatting vocabulary

    Time-based, media-centric Web presentations can be described declaratively in the XML world through the development of languages such as SMIL. It is difficult, however, to fully integrate them in a complete document transformation processing chain. In order to achieve the desired processing of data-driven, time-based, media-centric presentations, the text-flow-based formatting vocabularies used by style languages such as XSL, CSS and DSSSL need to be extended. The paper presents a selection of use cases which are used to derive a list of requirements for a multimedia style and transformation formatting vocabulary. The boundaries of applicability of existing text-based formatting models for media-centric transformations are analyzed. The paper then discusses the advantages and disadvantages of a fully-fledged time-based multimedia formatting model. Finally, the discussion is illustrated by describing the key properties of the example multimedia formatting vocabulary currently implemented in the back-end of our Cuypers multimedia transformation engine.
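
    As a rough illustration of the kind of extension argued for, the sketch below gives an abstract formatting object a temporal property alongside its children and serializes it to a SMIL fragment; the property names are illustrative and are not the Cuypers vocabulary itself.

        # An abstract formatting object with a temporal property, serialized
        # to a SMIL fragment; property names are illustrative only.
        presentation = {
            "temporal-flow": "sequence",     # "parallel" would play children together
            "children": [
                {"media": "intro.mp3", "duration": "8s"},
                {"media": "map.png",   "duration": "5s"},
            ],
        }

        def to_smil(node):
            tag = "seq" if node["temporal-flow"] == "sequence" else "par"
            body = "\n".join(f'  <ref src="{c["media"]}" dur="{c["duration"]}"/>'
                             for c in node["children"])
            return f"<{tag}>\n{body}\n</{tag}>"

        print(to_smil(presentation))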
